Kneser–Ney smoothing
Kneser–Ney smoothing is a method primarily used to calculate the probability distribution of ''n''-grams in a document based on their histories.〔('A Bayesian Interpretation of Interpolated Kneser-Ney NUS School of Computing Technical Report TRA2/06' )〕 It is widely considered the most effective method of smoothing because of its use of absolute discounting: a fixed value is subtracted from the count of each observed ''n''-gram, and the probability mass freed in this way is redistributed through lower-order terms to ''n''-grams with low or zero frequency. The approach is considered equally effective for both higher- and lower-order ''n''-grams.
A common example that illustrates the idea behind this method involves the frequency of the bigram "San Francisco". If it appears several times in a training corpus, the frequency of the unigram "Francisco" will also be high; yet "Francisco" occurs almost exclusively after "San", so a high unigram count alone says little about how likely the word is in other contexts. Relying only on the unigram frequency to predict the frequencies of ''n''-grams therefore leads to skewed results;〔('Brown University: Introduction to Computational Linguistics ' )〕 Kneser–Ney smoothing corrects this by estimating the unigram probability from the number of distinct words that can precede it, as the sketch below illustrates.
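To make the contrast concrete, the following minimal Python sketch (the toy corpus is invented for illustration) compares the raw frequency of a word with the number of distinct words that precede it; Kneser–Ney bases its lower-order probability on the latter:

<syntaxhighlight lang="python">
from collections import defaultdict

# Toy corpus, invented for illustration: "francisco" is frequent but almost
# always follows "san", while "glasses" follows many different words.
corpus = ("san francisco is in california . san francisco is foggy . "
          "reading glasses help . sun glasses help . opera glasses help .").split()

raw_count = defaultdict(int)   # plain unigram frequency
preceders = defaultdict(set)   # distinct words seen immediately before each word
for prev, cur in zip(corpus, corpus[1:]):
    raw_count[cur] += 1
    preceders[cur].add(prev)

for w in ("francisco", "glasses"):
    print(w, "raw:", raw_count[w], "distinct preceders:", len(preceders[w]))
# francisco raw: 2 distinct preceders: 1  -> unlikely in unfamiliar contexts
# glasses raw: 3 distinct preceders: 3   -> plausible in unfamiliar contexts
</syntaxhighlight>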
==Method==
The equation for bigram probabilities is as follows:

p_{KN}(w_i \mid w_{i-1}) = \frac{\max(c(w_{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-1}, w')} + \lambda_{w_{i-1}}\, p_{KN}(w_i)

where \lambda_{w_{i-1}} = \frac{\delta}{\sum_{w'} c(w_{i-1}, w')} \left|\{ w' : c(w_{i-1}, w') > 0 \}\right| is the weight given to the lower-order term.
〔('Kneser Ney Smoothing Explained' )〕
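As an illustration of the formula, here is a minimal Python sketch; the function name kneser_ney_bigram, the default discount of 0.75, and the bare token-list input are choices made for this example, and sentence boundaries and out-of-vocabulary words are not handled:

<syntaxhighlight lang="python">
from collections import Counter

def kneser_ney_bigram(tokens, delta=0.75):
    """Return p(w | prev) following the bigram equation above (sketch only)."""
    bigram_count = Counter(zip(tokens, tokens[1:]))
    context_total = Counter(tokens[:-1])  # sum over w' of c(prev, w')
    followers, preceders = {}, {}
    for prev, cur in bigram_count:
        followers.setdefault(prev, set()).add(cur)  # {w' : c(prev, w') > 0}
        preceders.setdefault(cur, set()).add(prev)  # {w' : c(w', cur) > 0}
    bigram_types = len(bigram_count)

    def p(w, prev):
        # Continuation probability: distinct contexts w completes / bigram types.
        p_cont = len(preceders.get(w, ())) / bigram_types
        total = context_total[prev]
        if total == 0:          # context never observed: fall back entirely
            return p_cont
        discounted = max(bigram_count[(prev, w)] - delta, 0) / total
        lam = (delta / total) * len(followers.get(prev, ()))  # lambda_{w_{i-1}}
        return discounted + lam * p_cont

    return p
</syntaxhighlight>

For any observed context and 0 ≤ \delta ≤ 1, the probabilities returned by this sketch sum to one over the vocabulary: the mass removed by discounting is exactly the weight \lambda_{w_{i-1}} handed to the continuation probabilities.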
This equation can be extended to ''n''-grams, where w_{i-n+1}^{i-1} denotes the n − 1 words preceding w_i:

p_{KN}(w_i \mid w_{i-n+1}^{i-1}) = \frac{\max(c(w_{i-n+1}^{i-1}, w_i) - \delta,\, 0)}{\sum_{w'} c(w_{i-n+1}^{i-1}, w')} + \frac{\delta \left|\{ w' : c(w_{i-n+1}^{i-1}, w') > 0 \}\right|}{\sum_{w'} c(w_{i-n+1}^{i-1}, w')}\, p_{KN}(w_i \mid w_{i-n+2}^{i-1})
〔('NLP Tutorial: Smoothing' )〕
The unigram probability p_{KN}(w_i) for a single word is a continuation probability: the number of distinct words it appears after, divided by the number of distinct word pairs in the corpus,

p_{KN}(w_i) = \frac{\left|\{ w' : 0 < c(w', w_i) \}\right|}{\left|\{ (w', w'') : 0 < c(w', w'') \}\right|}.

The parameter \delta is a constant discount subtracted from the count of each ''n''-gram, usually a value between 0 and 1. The model thus performs absolute-discounting interpolation, incorporating information from both higher- and lower-order language models. The lower-order term contributes most of the overall probability when the count of the higher-order ''n''-gram is zero;〔('An empirical study of smoothing techniques for language modeling' )〕 conversely, the influence of the lower-order model shrinks as the count of the ''n''-gram grows. A recursive sketch of this interpolation follows.
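The recursion can be sketched in Python as below. This is an illustrative reading of the equation rather than a faithful implementation: for brevity it reuses raw counts at every order (a complete Kneser–Ney model substitutes continuation counts below the top order), it bottoms out in a uniform distribution over the vocabulary, and the function name kneser_ney_ngram is invented here:

<syntaxhighlight lang="python">
from collections import Counter

def kneser_ney_ngram(tokens, n=3, delta=0.75):
    """Recursive absolute-discounting interpolation shaped like the
    n-gram equation above (simplified sketch, see lead-in text)."""
    vocab = set(tokens)
    counts = {k: Counter(zip(*(tokens[i:] for i in range(k))))
              for k in range(2, n + 1)}

    def p(w, context):
        context = tuple(context)[max(0, len(context) - (n - 1)):]
        if not context:
            return 1.0 / len(vocab)      # uniform base case (simplification)
        k = len(context) + 1
        grams = counts[k]
        total = sum(c for g, c in grams.items() if g[:-1] == context)
        if total == 0:                   # unseen context: drop its oldest word
            return p(w, context[1:])
        n_followers = sum(1 for g in grams if g[:-1] == context)
        lam = delta * n_followers / total
        return max(grams[context + (w,)] - delta, 0) / total + lam * p(w, context[1:])

    return p

# Example: the trigram context is unseen, so the model falls back to the
# bigram "san francisco".
tokens = "san francisco is foggy . san francisco is in california .".split()
model = kneser_ney_ngram(tokens, n=3)
print(model("francisco", ("foggy", "san")))  # ~0.68 with these toy counts
</syntaxhighlight>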
